Sunday, May 29, 2005

Uncertainty – A Growth Industry

There are few scientific challenges more complex than understanding the cause of disease in humans. Whether the data are from epidemiological investigations or studies with laboratory animals, extrapolation of the results to environmental exposure settings requires investigators to make causal inferences, a process that carries some uncertainty. “Manufactured uncertainty” has arisen as a term to describe how industry-sponsored science casts doubt on our understanding of potential occupational or environmental health risks. The outcome is a call to delay or defer risk-reduction activities until further studies can be conducted, on the grounds that the evidence of a health risk is not yet compelling enough to warrant action. Some perspectives on this issue can be found here, here and here.

Confined Space has written extensively, eloquently and angrily about the occupational health debacle involving the artificial butter flavor diacetyl, which has produced debilitating fixed airways disease (including bronchiolitis obliterans) in food processing workers making packaged popcorn. His commentary pointed me to the essay by David Michaels and Celeste Monforton, at George Washington University School of Public Health and Health Services, which used the diacetyl story as an example of declining health and environmental regulatory effectiveness in the face of industry's influence on our regulatory and tort system. Manufactured uncertainty is one of the concepts discussed in their essay.

Following a review of “popcorn lung” and the discussion by Michaels and Monforton, Confined Space states:

Their conclusion is something that every American worker needs to understand: Our system of regulating and controlling workers' exposure to toxic chemicals is broken, kaput, dead.

The use of science in environmental and occupational health regulatory decision making may be slowing to glacial stillness these days, due to the absence of a political mandate pushing regulatory agencies and regulated entities to work harder to protect workers and the public, and due to the influence of processes such as the Data Quality Act. Michaels and Monforton’s conclusion implies there were better times in the past, but I’m not sure there ever was much of a “golden age” of environmental/occupational health and safety regulation. Diacetyl just seems to be another in a long list of cases where we found out about adverse health or environmental effects the hard way – such as thalidomide, DES, vinyl chloride, DBCP and MTBE.

John Applegate of Indiana University makes the point here that our risk assessment-based approach for regulating environmental and occupational exposures is enormously data-hungry, because we have chosen to require fairly precise estimates of health risks as a basis for making regulatory decisions. He articulates a market-based argument for why producers of chemical substances should also be the suppliers of health effects data – internalizing the costs of the potential environmental health and safety consequences (including testing costs) encourages industry to avoid those costs by promoting product safety through the use of lower-toxicity alternatives, adequate warning information for users, or both.

According to Dr. Applegate, market failure creates a disincentive to better understand the risks from chemicals in commerce – knowing the risks creates regulatory and liability consequences, while remaining in ignorance has almost no consequences. Normally, products with undesirable characteristics will not survive long in the marketplace. However, the undesirable characteristics of toxic chemicals are much harder for consumers to detect, defeating this feedback mechanism in the absence of better consumer information. Rather than fixing the problem with the market, regulatory agencies (hence the public) have taken on the burden of proving that a chemical is unsafe, furthering the tendency of regulated entities to generate as little data as possible and to challenge whatever studies exist. Hence, manufactured uncertainty.

Daniel Sarewitz of Arizona State University argues that science can make environmental controversies worse by giving competing groups their own sets of facts, tied to their particular value systems, to argue over. Scientific uncertainty is less a matter of a lack of information than of a lack of coherence among competing scientific understandings, amplified by the various political, cultural and institutional viewpoints. He concludes:

. . . the value bases of disputes underlying environmental controversies must be fully articulated and adjudicated through political means before science can play an effective role in resolving environmental problems.

Clearly, something has happened with our ideas of how to make risk management decisions about chemical substances when there is uncertainty (which is all the time. . .). It has become altogether too easy to say “we don’t know enough to act”, particularly if you’re the one who will have to pay for cleanup or risk reduction. The reflex call for more data and more studies won’t overcome the uncertainty in decision making if, as Dr. Sarewitz argues, the ground rules and outcomes aren’t defined ahead of time; “we need another study” just won’t cut it if you don’t have a framework for how the data will be used.

For example, NCI published a study last year of Chinese workers exposed to benzene, concluding there were observable changes in white blood cells at exposures down to 1 ppm in air (the current PEL for benzene). The petroleum industry has funded a follow-up study, to be completed in 2007, which industry representatives say is expected to refute the findings from NCI’s study. What apparently hasn’t been defined, either by the industry or by regulatory agencies, is how the weight of evidence from both studies will be used to inform the debate about occupational exposure limits for benzene. If industry’s study confirms NCI’s results, will there be a call for yet another study before decision making can commence?

This is a loophole in the risk assessment/risk management paradigm: an obsessive focus on understanding the risks can paralyze decision making. We were working in the 1990s on efforts to address this problem and make risk assessment/risk management a more useful process (see here, here and here), but those efforts apparently have been set aside. The Data Quality Act had consequences, possibly unintended by its sponsors, for science and risk assessment. The DQA was passed during the waning days of the Clinton Administration as a rider to a budget bill. There were no hearings on it, and the wording of the Act was brief – just 27 lines. From this modestly written law, an enormous regulatory framework has sprung up, in which judgments about when scientific data are adequate and sufficient for use in decision making are made by bureaucrats, technocrats and scientists in the federal government, affected industries and, in some cases, environmental activist groups, with the affected communities and workers reduced to the role of a passive audience.

For regulated entities, the Data Quality Act can become a shield against regulatory decisions based on studies that indicate adverse effects but leave some uncertainty in risk assessment, by asserting that such decisions must be made using the “best” science possible. But that’s a logical fallacy, not a practical approach for acquiring data needed for decision making (the term “best” is a poor definition of a performance standard). The call to use the “best” science and “best” estimates may do little to overcome the uncertainties that are inherent in risk assessment, because of the number and nature of the bridging assumptions required in the process. Decision making would benefit more from clear, thoughtful statements about the uncertainties involved in a risk assessment and identification of the assumptions the analyst used to bridge them.

This is clearly a problem, if you are looking at things through a public health lens. What’s to be done?

EPA’s Data Quality Objectives (DQO) process, just as much a part of EPA’s Quality System as the Information Quality Guidelines used by those criticizing regulatory agency science, views determining the quality of scientific information needed as a systematic process to:

[C]larify study objectives, define the appropriate type of data, and specify tolerable levels of potential decision errors that will be used as the basis for establishing the quality and quantity of data needed to support decisions.

DQOs, used within an analytical-deliberative framework in which all of the stakeholders are at the table, represent a way to address the problem of making environmental controversies worse with science. It may be one antidote to manufactured uncertainty.
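The phrase “tolerable levels of potential decision errors” has concrete teeth: once stakeholders agree on acceptable false-positive and false-negative rates, those rates translate directly into how much data must be collected. As an illustrative sketch (my own, not drawn from the post; it uses the standard one-sided normal-theory sample-size formula, and EPA’s QA/G-4 guidance adds a small correction term on top of this):

```python
from math import ceil
from statistics import NormalDist

def dqo_sample_size(sigma, delta, alpha=0.05, beta=0.20):
    """Estimate how many samples are needed to detect a mean exceedance
    of size `delta` given measurement variability `sigma`, with a
    tolerable false-positive rate `alpha` and false-negative rate `beta`
    (one-sided test). Simplified form of the DQO-style calculation."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # critical value for alpha
    z_b = NormalDist().inv_cdf(1 - beta)   # critical value for beta
    return ceil(((z_a + z_b) * sigma / delta) ** 2)

# Hypothetical numbers: detect a 0.5 ppm exceedance when the combined
# sampling/analytical standard deviation is 1.0 ppm.
n = dqo_sample_size(sigma=1.0, delta=0.5)  # -> 25 samples
```

The point of making this explicit up front is exactly the antidote the post describes: if everyone agrees beforehand that, say, 25 samples at these error rates settle the question, “we need another study” loses its force after the data come in.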

Jared Diamond makes the point that we, the public, have the responsibility and power to compel industries to achieve higher environmental, health and safety standards (including collecting needed health and safety data), either directly, by focusing on critical links in the supply chain, or through our politicians. As I’ve noted previously, summoning the will to exert that responsibility may take some doing – James Howard Kunstler notes that many in this country are a bit distracted right now. But we’ve done it in the past. Moral environmentalism (see here and here), the belief arising in the mid-19th century regarding the influence of the environment on the human condition, led to a reform movement to address urban environmental problems such as poor housing and sanitary conditions. Maybe we need a new moral environmentalism movement for toxic substances. These alternatives, along with reasoned use of the precautionary principle (“reasoned” is warranted – applied without some forethought, the principle can result in less than optimal decisions), may be able to overcome the influence of manufactured uncertainty.

Wednesday, May 25, 2005

Single Topic Blogs

I’m getting my bearings after more than a month of very long hours at work, with lots of travel on top of them, so I’m still not up to the challenge of posting anything too technical. I’m reading the CDC study on body weight and mortality right now (yes, the one that led David Brooks to conclude that fat is your friend). Stand by to see what leaps of scientific insight brought David to that conclusion. Until then, we’ll have to be content with introductions to some interesting single-topic blogs. I became more aware of these after encountering the stalwart TCEBlog, which discusses all things concerning the solvent trichloroethylene. Since then, I’ve run across:

“The Rest of the Story: Tobacco News Analysis and Commentary”. No question about the focus for this one.

“Watershed” (with a catchy subtitle: “Whiskey is for drinking, water is for fighting over”). Addresses all things concerning water, soon to be a more important topic than even peak oil.

“Sprol, the Planetary Sightseeing Blog”, possibly the most interesting find, providing satellite imagery of distressed urban watersheds, Three Gorges Dam, brownfields and Chernobyl. It’s a little like negative eco-tourism from orbit.

Thanks to Collision Detector for the Watershed link, and to Kittybenders for the introduction to Sprol.

Monday, May 23, 2005

What’s Your Worldview?

I’m an existentialist. It figures.

You scored as Existentialist.

Existentialism emphasizes human capability. There is no greater power interfering with life and thus it is up to us to make things happen. Sometimes considered a negative and depressing worldview, your optimism towards human accomplishment is immense. Mankind is condemned to be free and must accept the responsibility.

Existentialist – 94%

Materialist – 94%

Modernist – 53%

Postmodernist – 50%

Cultural Creative – 50%

Idealist – 31%

Fundamentalist – 13%

Take the quiz here.

Thanks to Majikthise for the link.

Sunday, May 22, 2005

Getting More Than You Expected at the Beach

A day at the beach can net you more than fun and mild sunburn. You can also come home with gastroenteritis, swimmer’s ear and conjunctivitis. Earlier this month, researchers from UC Riverside and UC Irvine published a study calculating the health care burden associated with swimming in polluted waters at two popular Southern California beaches. The study is forthcoming in the Journal of Environmental Management (see the “Articles in Press” link).

Their analysis indicates that $3.3 million per year is spent on health-related expenses, based on lost wages and medical care, to treat more than 74,000 incidents of stomach illness, respiratory disease and eye, ear and skin infections caused by exposure to the polluted waters in a typical year. Additional costs for purchases of over-the-counter medicine and losses to the local tourism industry (“surf Newport Beach – get the crud” really brings people out) were not included in the analysis, so the $3.3 million figure is undoubtedly on the low side.
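The study’s headline numbers can be sanity-checked with simple division; the per-incident figure below is my own back-of-envelope illustration, not a number reported by the authors:

```python
# Figures as quoted in the study (annual totals for the two beaches).
annual_cost_usd = 3_300_000
annual_incidents = 74_000

# Average health-related cost per illness episode.
cost_per_incident = annual_cost_usd / annual_incidents  # roughly $45

# A per-incident cost of about $45 is consistent with minor illness
# (some lost wages plus a doctor visit) rather than serious disease,
# which matters for how decision makers will read the total.
```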

An interesting aside was that both beaches had water quality well within accepted levels, as defined by EPA and State of California standards. The study also noted that if bacteria levels in these coastal waters were at the maximum allowed under the water quality standards, the associated health costs would exceed $7 million per year.

“The ultimate value of this research is for policymakers, who are well aware of the substantial costs involved with cleaning up water pollution, but need to know the other side of the equation – the costs associated with not cleaning up the water,” said UC Irvine’s Ryan Dwight, one of the authors of the study.

Of course, there’s a problem here if the costs are not framed properly. The costs of adequate water treatment are going to be well in excess of $3.3 million (that figure probably wouldn’t cover the design costs for new wastewater treatment plants), so a decision maker could look at these data and conclude that the disease burden, on a cost basis, is acceptable and that additional water treatment isn’t warranted. I can hear it now: “we’re talking about the runs and earaches, not cancer and birth defects!”

Risk-based analyses such as this one are prone to be misused, if all stakeholders are not at the table when making the decisions. It’s possible that in the larger scheme of things, all parties involved (local governments, state governments, tourist-related industries, hospitals and public health officials, beach-front residents, beach users and surfers), might concur that 74,000 disease episodes and $3.3 million/year in health care costs represent acceptable risks associated with a day at the beach under the current water quality standards. But that has to be a group decision. Once again, I’ll say that we need to be absorbing the lessons from books such as “Understanding Risk: Informing Decisions in a Democratic Society” published in 1996 by the National Academy of Sciences (NAS), to anchor risk and impact analyses within democratic institutions.

The researchers ended their work with a call for further studies to fully understand the economic impact of coastal water pollution on tourism, recreational values and other related factors. Of course they did. That’s what researchers do. However, developing the total cost analysis associated with coastal water quality is merely a detail. The real work, where research is really needed, is in how to better integrate this kind of information into decision-making processes.

Lovers’ Tiff

It is heartening to hear that the American buying public’s love affair with the SUV is beginning to cool. This is the pronouncement of the New York Times. Since the Times is rarely, if ever, out in front of such issues, this must indeed be conventional wisdom. Now we’ll really be making some progress when we hear the Times pronounce that Americans' love affair with their bicycles is heating up.